Long short-term memory (LSTM) is a powerful type of deep neural network that has been widely used in many sequence analysis and modeling applications. However, the large model size of LSTM networks still makes their practical deployment very challenging, especially for video recognition tasks that require high-dimensional input data. Aiming to overcome this limitation and fully unlock the potential of LSTM models, in this paper we propose to perform algorithm and hardware co-design towards high-performance, energy-efficient LSTM networks. At the algorithm level, we develop a fully decomposed hierarchical Tucker (FDHT) structure-based LSTM, namely FDHT-LSTM, which enjoys ultra-low model complexity while still achieving high accuracy. In order to fully reap this attractive algorithmic benefit, we further develop a corresponding customized hardware architecture to support efficient execution of the proposed FDHT-LSTM model. With a carefully designed memory access scheme, the complicated matrix transformations can be supported by the underlying hardware on the fly and without any access conflicts. Our evaluation results show that both the proposed ultra-compact FDHT-LSTM models and the corresponding hardware accelerator achieve very high performance. Compared with state-of-the-art compressed LSTM models, FDHT-LSTM enjoys both an order-of-magnitude reduction in model size and significant accuracy improvements across different video recognition datasets. Meanwhile, compared with TIE, the state-of-the-art hardware accelerator for tensor-decomposed models, our proposed FDHT-LSTM architecture achieves higher throughput, area efficiency and energy efficiency on the LSTM-Youtube workload. On the LSTM-UCF workload, our design also outperforms TIE with higher throughput, higher energy efficiency and comparable area efficiency.
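To illustrate why tensor decomposition can shrink LSTM weights by orders of magnitude, the sketch below replaces a dense gate weight matrix with a tensor-train factorization, used here as a simpler stand-in for the fully decomposed hierarchical Tucker structure; all mode sizes and ranks are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

# Illustrative stand-in: represent a dense LSTM gate weight W (M x N) with
# tensor-train (TT) cores and apply it without ever forming W explicitly.
# Shapes and ranks below are assumptions for the sake of the example.

in_modes  = (8, 8, 8)    # N = 512 input features, factorized as 8*8*8
out_modes = (8, 8, 16)   # M = 1024 gate outputs, factorized as 8*8*16
ranks     = (1, 4, 4, 1) # TT ranks r0..r3 (boundary ranks are 1)

rng = np.random.default_rng(0)
cores = [
    rng.standard_normal((ranks[k], out_modes[k], in_modes[k], ranks[k + 1])) * 0.1
    for k in range(3)
]

def tt_matvec(cores, x_vec, in_modes, out_modes):
    """Compute y = W x where W is given by TT cores, contracting mode by mode."""
    x = x_vec.reshape(in_modes)                      # (n1, n2, n3)
    t = np.einsum('pbaq,acd->pbqcd', cores[0], x)    # contract over n1
    t = np.einsum('pbqcd,qecs->pbesd', t, cores[1])  # contract over n2
    t = np.einsum('pbesd,sfdu->pbefu', t, cores[2])  # contract over n3
    return t.reshape(int(np.prod(out_modes)))

x = rng.standard_normal(int(np.prod(in_modes)))
y = tt_matvec(cores, x, in_modes, out_modes)

dense_params = int(np.prod(in_modes)) * int(np.prod(out_modes))
tt_params = sum(c.size for c in cores)
print(y.shape, dense_params, tt_params)   # (1024,) 524288 1792
```

For these toy shapes the dense weight holds 524,288 parameters while the three cores hold 1,792, which is the flavor of savings the FDHT structure exploits.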
Model compression and model defense for deep neural networks (DNNs) have been extensively and individually studied. Considering the co-importance of model compactness and robustness in practical applications, several prior works have explored improving the adversarial robustness of sparse neural networks. However, the structured sparse models obtained by existing works suffer severe degradation in both benign and robust accuracy, creating a challenging dilemma between the robustness and structuredness of compact DNNs. To address this problem, in this paper we propose CSTAR, an efficient solution that simultaneously imposes low-rankness-based Compactness, high STructuredness and high Adversarial Robustness on the target DNN models. By formulating the low-rankness and robustness requirements within the same framework and determining the ranks globally, the compressed DNNs can simultaneously achieve high compression performance and strong adversarial robustness. Evaluations of various DNN models on different datasets demonstrate the effectiveness of CSTAR. Compared with state-of-the-art robust structured pruning methods, CSTAR shows consistently better performance. For instance, when compressing ResNet-18 on CIFAR-10, CSTAR achieves up to 20.07% and 11.91% improvement in benign accuracy and robust accuracy, respectively. When compressing ResNet-18 at a 16x compression ratio on ImageNet, CSTAR obtains an 8.58% benign accuracy gain and a 4.27% robust accuracy gain compared to the existing robust structured pruning method.
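As a rough illustration of coupling low-rank structure with adversarial robustness, the sketch below trains a toy network whose weights are explicitly factorized into low-rank pairs on PGD adversarial examples; CSTAR's global rank selection and exact training objective are not reproduced, and every dimension, rank and attack hyper-parameter here is an assumption.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowRankLinear(nn.Module):
    """Linear layer whose weight is kept in factored form W = U @ V (rank-r)."""
    def __init__(self, d_in, d_out, rank):
        super().__init__()
        self.U = nn.Parameter(torch.randn(d_out, rank) * 0.05)
        self.V = nn.Parameter(torch.randn(rank, d_in) * 0.05)
    def forward(self, x):
        return x @ self.V.t() @ self.U.t()   # same as x @ (U @ V).T, never forms W

model = nn.Sequential(LowRankLinear(784, 256, rank=16), nn.ReLU(),
                      LowRankLinear(256, 10, rank=8))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

def pgd_attack(x, y, eps=0.3, alpha=0.05, steps=5):
    """L_inf PGD attack used to generate adversarial training inputs."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to the eps-ball
    return x_adv.detach()

# one adversarial training step on a random toy batch
x, y = torch.rand(32, 784), torch.randint(0, 10, (32,))
loss = F.cross_entropy(model(pgd_attack(x, y)), y)
opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```

Because the weights only ever exist as small factor pairs, the compactness is structured by construction, while training on PGD examples targets the robustness side of the objective.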
Neural network (NN)-based methods have become an attractive approach for robot motion planning due to the strong learning capability of NN models and their inherently high parallelism. Despite the current development in this direction, the efficient capture and processing of important sequential and spatial information, in a direct and simultaneous way, remain relatively under-explored. To overcome this challenge and unlock the potential of neural networks for motion planning tasks, in this paper we propose STP-Net, an end-to-end learning framework that can fully extract and leverage important spatio-temporal information to form an efficient neural motion planner. By interpreting the robot's movement as a video clip, robot motion planning is transformed into a video prediction task that STP-Net can perform in a spatially and temporally efficient way. Empirical evaluations across different seen and unseen environments show that, with near-100% accuracy (i.e., success rate), STP-Net demonstrates very promising performance in terms of both planning speed and path cost. Compared with existing NN-based motion planners, STP-Net achieves at least 5x, 2.6x and 1.8x faster planning with lower path cost in 2D random forest, 2D maze and 3D random forest environments, respectively. Furthermore, STP-Net can quickly and simultaneously compute multiple near-optimal paths in multi-robot motion planning tasks.
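A minimal sketch of the "planning as video prediction" framing: a small convolutional predictor takes a few past frames that encode the obstacle map and the robot's position and outputs a heatmap over the next position. The actual STP-Net architecture, losses and training data are not reproduced; the frame encoding and layer sizes below are assumptions.

```python
import torch
import torch.nn as nn

K = 4                                        # number of past frames fed to the predictor

class NextStepPredictor(nn.Module):
    """Toy predictor: stacked past frames in, logits over the next robot cell out."""
    def __init__(self, k=K):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * k, 32, 3, padding=1), nn.ReLU(),   # 2 channels per frame: obstacles + robot mask
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),                              # next-position heatmap logits
        )
    def forward(self, frames):                # frames: (B, 2*K, H, W)
        return self.net(frames)

model = NextStepPredictor()
frames = torch.rand(1, 2 * K, 64, 64)         # toy 64x64 workspace
logits = model(frames)                         # (1, 1, 64, 64)
next_cell = logits.flatten(1).argmax(dim=1)    # most likely next grid cell
print(logits.shape, next_cell)
```

Rolling such a predictor forward step by step would trace out a candidate path, which is the prediction-style view of planning the abstract describes.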
Federated learning (FL) enables training a global model without sharing the decentralized raw data stored on multiple devices, thereby protecting data privacy. Due to the diverse capacities of the devices, FL frameworks struggle with the straggler effect and stale models. In addition, data heterogeneity incurs severe accuracy degradation of the global model during FL training. To address these issues, we propose a hierarchical synchronous FL framework, named FedHiSyn. FedHiSyn first clusters all available devices into a small number of categories based on their computing capacity. After a certain interval of local training, the models trained in different categories are simultaneously uploaded to a central server. Within a single category, devices communicate their locally updated model weights to one another based on a ring topology. Since the efficiency of training over a ring topology favors devices with homogeneous resources, the capacity-based categorization mitigates the impact of the straggler effect. Furthermore, the combination of synchronous updating across multiple categories and device communication within a single category helps address the data heterogeneity issue while achieving high accuracy. We evaluate the proposed framework on the MNIST, EMNIST, CIFAR10 and CIFAR100 datasets and diverse heterogeneous settings of devices. Experimental results show that FedHiSyn outperforms six baseline methods, e.g., FedAvg, SCAFFOLD and FedAT, in terms of training accuracy and efficiency.
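Abstracted to a toy model with simulated local training, the control flow might look roughly like the following; the grouping rule, mixing coefficients and group count are illustrative assumptions rather than FedHiSyn's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 10
global_w = np.zeros(dim)

devices = [{"capacity": rng.uniform(1, 4)} for _ in range(12)]

# 1) cluster devices into categories by computing capacity (simple quantile buckets)
caps = np.array([d["capacity"] for d in devices])
bins = np.quantile(caps, [1 / 3, 2 / 3])
for d in devices:
    d["group"] = int(np.digitize(d["capacity"], bins))

def local_train(w):
    return w - 0.1 * rng.standard_normal(dim)   # stand-in for a few local SGD steps

group_models = []
for g in range(3):
    members = [d for d in devices if d["group"] == g]
    # 2) ring-style relay inside the group: each device trains locally, then
    #    mixes its weights into the model handed over by its ring predecessor
    relay = global_w.copy()
    for d in members:
        d["w"] = local_train(global_w.copy())
        relay = 0.5 * relay + 0.5 * d["w"]
    group_models.append(relay)

# 3) synchronous server-side aggregation: one model per category, averaged
global_w = np.mean(group_models, axis=0)
print(global_w.round(3))
```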
Data imputation has been extensively explored to solve the missing data problem. The dramatically increasing volume of incomplete data makes imputation models computationally infeasible in many real-life applications. In this paper, we propose an effective and scalable imputation system named SCIS to significantly speed up the training of differentiable generative adversarial imputation models under accuracy guarantees for large-scale incomplete data. SCIS consists of two modules: differentiable imputation modeling (DIM) and sample size estimation (SSE). DIM leverages a new masking Sinkhorn divergence function to make an arbitrary generative adversarial imputation model differentiable, while for such a differentiable imputation model, SSE can estimate an appropriate sample size to ensure the user-specified imputation accuracy of the final model. Extensive experiments on several real-life large-scale datasets demonstrate that our proposed system can accelerate generative adversarial model training by 7.1x. Using around 7.6% of the samples, SCIS yields competitive accuracy with state-of-the-art imputation methods in a much shorter computation time.
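As a hedged illustration of using a Sinkhorn (entropic optimal-transport) cost as a differentiable imputation signal, the sketch below treats missing entries as learnable parameters and minimizes the OT cost between two random batches of the imputed data; the masking Sinkhorn divergence, GAN-based imputers and sample-size estimation module of the actual system are considerably more involved, and all data, batch sizes and the regularization strength here are assumptions.

```python
import torch

torch.manual_seed(0)

def sinkhorn_cost(X, Y, eps=1.0, iters=100):
    """Entropic OT cost between two empirical batches; differentiable w.r.t. X and Y."""
    C = torch.cdist(X, Y) ** 2                      # pairwise squared distances
    K = torch.exp(-C / eps)
    u = torch.full((X.shape[0],), 1.0 / X.shape[0])
    v = torch.full((Y.shape[0],), 1.0 / Y.shape[0])
    a, b = torch.ones_like(u), torch.ones_like(v)
    for _ in range(iters):                          # Sinkhorn fixed-point iterations
        a = u / (K @ b)
        b = v / (K.t() @ a)
    P = a[:, None] * K * b[None, :]                 # entropic transport plan
    return (P * C).sum()

# toy incomplete dataset: 200 samples, 4 features, ~20% of entries missing
data = torch.randn(200, 4)
mask = torch.rand_like(data) > 0.2                  # True = observed
fill = torch.zeros_like(data, requires_grad=True)   # learnable imputations
opt = torch.optim.Adam([fill], lr=0.05)

for step in range(200):
    imputed = torch.where(mask, data, fill)          # plug imputations into the gaps
    idx = torch.randperm(200)
    loss = sinkhorn_cost(imputed[idx[:64]], imputed[idx[64:128]])
    opt.zero_grad(); loss.backward(); opt.step()

print(float(loss))
```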
E-health is gaining popularity due to growing concerns over personal healthcare and the pandemic. Nowadays, enhancing medical diagnosis with machine learning models has proven highly effective in many aspects of e-health analytics. However, in the classic cloud-based/centralized e-health paradigm, all the data is centrally stored on a server to facilitate model training, which inevitably incurs privacy concerns and high latency. Distributed solutions such as decentralized stochastic gradient descent (D-SGD) have been proposed to provide safe and timely diagnostic results based on personal devices. However, methods like D-SGD suffer from the gradient vanishing issue and usually proceed slowly in the early training stage, impeding the effectiveness and efficiency of training. In addition, existing methods are prone to learning models biased towards users with dense data, compromising fairness when providing e-health analytics for minority groups. In this paper, we propose a Decentralized Block Coordinate Descent (D-BCD) learning framework that can better optimize deep neural network-based models distributed over decentralized devices for e-health analytics. Benchmarking experiments on three real-world datasets illustrate the effectiveness and practicality of our proposed D-BCD, and additional simulation studies showcase the strong applicability of D-BCD in real-life e-health scenarios.
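A toy sketch of decentralized block coordinate descent on a least-squares problem is given below: each device updates only one block of coordinates per round on its private data and then gossips with its ring neighbors. Block sizes, the step size and the mixing rule are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_dev, block = 12, 4, 3                 # 12 parameters split into 4 blocks of 3
w_true = rng.standard_normal(d)

# private data on each device, all consistent with the same ground-truth model
data = []
for _ in range(n_dev):
    X = rng.standard_normal((50, d))
    y = X @ w_true + 0.01 * rng.standard_normal(50)
    data.append((X, y))

W = np.zeros((n_dev, d))                   # one local model copy per device
for rnd in range(200):
    b = rnd % (d // block)                 # block updated in this round
    idx = slice(b * block, (b + 1) * block)
    for k, (X, y) in enumerate(data):      # local block-gradient step on private data
        grad = X[:, idx].T @ (X @ W[k] - y) / len(y)
        W[k, idx] -= 0.1 * grad
    # gossip: average each copy with its ring neighbors
    W = (W + np.roll(W, 1, axis=0) + np.roll(W, -1, axis=0)) / 3.0

print(np.linalg.norm(W.mean(axis=0) - w_true))   # consensus error; should be small
```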
Filter pruning has been widely used for neural network compression because of the practical acceleration it enables. To date, most existing filter pruning works explore the importance of filters via intra-channel information. In this paper, starting from an inter-channel perspective, we propose to perform efficient filter pruning using channel independence, a metric that measures the correlations among different feature maps. Less independent feature maps are interpreted as containing less useful information/knowledge, and hence their corresponding filters can be pruned without affecting model capacity. We systematically investigate the quantification metric, measurement scheme and sensitivity/reliability of channel independence in the context of filter pruning. Our evaluation results for different models on various datasets show the superior performance of our approach. Notably, on the CIFAR-10 dataset our solution brings 0.75% and 0.94% accuracy increases over the baseline ResNet-56 and ResNet-110 models, respectively, while the model size and FLOPs are reduced by 42.8% and 47.4% (for ResNet-56) and 48.3% and 52.1% (for ResNet-110), respectively. On the ImageNet dataset, our approach achieves 40.8% and 44.8% reductions in storage and computation, respectively, while keeping accuracy within 0.15% of the baseline. The code is available at https://github.com/eclipsess/chip_neurivs2021.
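One plausible instantiation of such an inter-channel score, sketched below, measures how much the nuclear norm of the stacked feature-map matrix drops when a channel is removed; the paper's exact metric, measurement scheme and batch averaging may differ, and the toy feature maps are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
C, H, W = 8, 14, 14
feats = rng.standard_normal((C, H, W))
feats[3] = 0.7 * feats[1] + 0.3 * feats[5]      # make channel 3 highly dependent on others

A = feats.reshape(C, -1)                         # C x (H*W) feature matrix
full_norm = np.linalg.norm(A, ord='nuc')         # nuclear norm of all channels together

scores = np.empty(C)
for c in range(C):
    A_minus = A.copy()
    A_minus[c] = 0.0                             # mask out channel c
    scores[c] = full_norm - np.linalg.norm(A_minus, ord='nuc')

# channels whose removal barely changes the nuclear norm are largely predictable
# from the remaining channels, i.e. least independent -> pruned first
prune_order = np.argsort(scores)
print(scores.round(3))
print("prune first:", prune_order[:2])           # channel 3 should rank near the bottom
```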
Given a video captured from a first-person perspective and the environmental context of where the video is recorded, can we recognize what the person is doing and identify where the action occurs in 3D space? We address this challenging problem of jointly recognizing and localizing actions on a known 3D map from egocentric videos. To this end, we propose a novel deep probabilistic model. Our model takes as inputs a Hierarchical Volumetric Representation (HVR) of the 3D environment and an egocentric video, treats the 3D action location as a latent variable, and recognizes the action based on the video and the contextual cues surrounding its latent location. To evaluate our model, we conduct extensive experiments on a subset of the Ego4D dataset, in which naturalistic human actions and photo-realistic 3D environment reconstructions are captured. Our method demonstrates strong results on both action recognition and 3D action localization across seen and unseen environments. We believe our work points to an exciting research direction at the intersection of egocentric vision and 3D scene understanding.
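The latent-location idea can be sketched generically: score a set of candidate 3D locations from the video, predict an action distribution conditioned on each location's local context, and marginalize over the latent location. The hierarchical volumetric representation and the real feature extractors are not reproduced, and all dimensions and modules below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

D_VID, D_CTX, N_LOC, N_ACT = 128, 64, 20, 10

loc_scorer = nn.Bilinear(D_VID, D_CTX, 1)            # scores p(location | video, map context)
act_head   = nn.Linear(D_VID + D_CTX, N_ACT)         # predicts p(action | video, location)

video_feat = torch.randn(1, D_VID)                    # pooled egocentric video feature
ctx_feats  = torch.randn(N_LOC, D_CTX)                # one context feature per candidate 3D location

v = video_feat.expand(N_LOC, -1)
loc_logits = loc_scorer(v, ctx_feats).squeeze(-1)      # (N_LOC,)
p_loc = F.softmax(loc_logits, dim=0)                   # posterior over the latent location

act_logits = act_head(torch.cat([v, ctx_feats], dim=1))    # (N_LOC, N_ACT)
p_act_given_loc = F.softmax(act_logits, dim=1)

p_act = (p_loc.unsqueeze(1) * p_act_given_loc).sum(dim=0)  # marginalize out the location
print(p_act.argmax().item(), p_loc.argmax().item())        # predicted action and 3D location
```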
We introduce a new tool for stochastic convex optimization (SCO): a Reweighted Stochastic Query (ReSQue) estimator for the gradient of a function convolved with a (Gaussian) probability density. Combining ReSQue with recent advances in ball oracle acceleration [CJJJLST20, ACJJS21], we develop algorithms achieving state-of-the-art complexities for SCO in parallel and private settings. For a SCO objective constrained to the unit ball in $\mathbb{R}^d$, we obtain the following results (up to polylogarithmic factors). We give a parallel algorithm obtaining optimization error $\epsilon_{\text{opt}}$ with $d^{1/3}\epsilon_{\text{opt}}^{-2/3}$ gradient oracle query depth and $d^{1/3}\epsilon_{\text{opt}}^{-2/3} + \epsilon_{\text{opt}}^{-2}$ gradient queries in total, assuming access to a bounded-variance stochastic gradient estimator. For $\epsilon_{\text{opt}} \in [d^{-1}, d^{-1/4}]$, our algorithm matches the state-of-the-art oracle depth of [BJLLS19] while maintaining the optimal total work of stochastic gradient descent. We give an $(\epsilon_{\text{dp}}, \delta)$-differentially private algorithm which, given $n$ samples of Lipschitz loss functions, obtains near-optimal optimization error and makes $\min(n, n^2\epsilon_{\text{dp}}^2 d^{-1}) + \min(n^{4/3}\epsilon_{\text{dp}}^{1/3}, (nd)^{2/3}\epsilon_{\text{dp}}^{-1})$ queries to the gradients of these functions. In the regime $d \le n \epsilon_{\text{dp}}^{2}$, where privacy comes at no cost in terms of the optimal loss up to constants, our algorithm uses $n + (nd)^{2/3}\epsilon_{\text{dp}}^{-1}$ queries and improves recent advancements of [KLL21, AFKT21]. In the moderately low-dimensional setting $d \le \sqrt n \epsilon_{\text{dp}}^{3/2}$, our query complexity is near-linear.
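A numerical sketch of the reweighting idea is given below: samples drawn once around a reference point are reused, via Gaussian importance weights, to estimate the gradient of the Gaussian-smoothed objective at a nearby query point; the paper's estimator includes additional bias- and variance-control details tied to the distance between the two points, and the test function here is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
d, rho, n = 5, 0.5, 200_000

def grad_f(z):
    # subgradient of f(z) = ||z||_1, a convex Lipschitz test objective
    return np.sign(z)

xbar = np.full(d, 0.3)                             # reference point
x = xbar + 0.1 * rng.standard_normal(d)            # nearby query point

Z = xbar + rho * rng.standard_normal((n, d))        # samples drawn once, at xbar

# importance weights N(z; x, rho^2 I) / N(z; xbar, rho^2 I)
log_w = (np.sum((Z - xbar) ** 2, axis=1) - np.sum((Z - x) ** 2, axis=1)) / (2 * rho ** 2)
w = np.exp(log_w)

resque_est = (w[:, None] * grad_f(Z)).mean(axis=0)   # reweighted estimate at x

# reference: estimate the smoothed gradient by sampling directly at x
Z_direct = x + rho * rng.standard_normal((n, d))
direct_est = grad_f(Z_direct).mean(axis=0)

print(np.round(resque_est, 3))
print(np.round(direct_est, 3))    # the two estimates should be close
```

The practical point is that one batch of samples taken at the reference point can answer gradient queries at many nearby points, which is what makes the estimator useful inside ball-oracle acceleration schemes.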
New-architecture GPUs such as the A100 are equipped with multi-instance GPU (MIG) technology, which allows the GPU to be partitioned into multiple small, isolated instances. This technology provides more flexibility for users to support both deep learning training and inference workloads, but efficiently utilizing it can still be challenging. The vision of this paper is to provide a more comprehensive and practical benchmark study for MIG in order to eliminate the need for tedious manual benchmarking and tuning efforts. To achieve this vision, the paper presents MIGPerf, an open-source tool that streamlines the benchmark study for MIG. Using MIGPerf, the authors conduct a series of experiments, including deep learning training and inference characterization on MIG, GPU sharing characterization, and framework compatibility with MIG. The results of these experiments provide new insights and guidance for users to effectively employ MIG, and lay the foundation for further research on the orchestration of hybrid training and inference workloads on MIGs. The code and results are released on https://github.com/MLSysOps/MIGProfiler. This work is still in progress and more results will be published soon.
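As a point of reference for what such a benchmark automates, the sketch below pins a process to a single MIG instance by exporting its UUID through CUDA_VISIBLE_DEVICES and measures inference throughput by hand; the UUID is a placeholder, the model and batch size are arbitrary choices, and this is not MIGPerf's own interface.

```python
import os
import time
import torch
import torchvision.models as models

# pin this process to one MIG instance (placeholder UUID; list real ones with nvidia-smi -L)
os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet50(weights=None).eval().to(device)
batch = torch.randn(32, 3, 224, 224, device=device)

with torch.no_grad():
    for _ in range(10):                      # warm-up iterations
        model(batch)
    if device == "cuda":
        torch.cuda.synchronize()
    start = time.time()
    iters = 50
    for _ in range(iters):                   # timed iterations
        model(batch)
    if device == "cuda":
        torch.cuda.synchronize()
    elapsed = time.time() - start

print(f"throughput: {iters * batch.shape[0] / elapsed:.1f} images/s on {device}")
```

Repeating this kind of measurement across models, batch sizes and MIG partition layouts is exactly the tedious sweep that a dedicated benchmarking tool is meant to replace.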